- In widely used sociological descriptions of how accountability is structured through institutions, an “actor” (e.g., the developer) is accountable to a “forum” (e.g., regulatory agencies) empowered to pass judgments on and demand changes from the actor or enforce sanctions. However, questions about structuring accountability persist: why and how is a forum compelled to keep making demands of the actor when such demands are called for? To whom is a forum accountable in the performance of its responsibilities, and how can its practices and decisions be contested? In the context of algorithmic accountability, we contend that a robust accountability regime requires a triadic relationship, wherein the forum is also accountable to another entity: the public(s). Typically, as is the case with environmental impact assessments, public(s) make demands upon the forum's judgments and procedures through the courts, thereby establishing a minimum standard of due diligence. However, three core challenges prevent the public from approaching the courts when faced with algorithmic harms: (1) lack of documentation, (2) difficulties in claiming standing, and (3) struggles around the admissibility of, and consensus over, expert evidence on the workings of algorithmic systems in adversarial proceedings. In this paper, we demonstrate that the courts are the primary route—and the primary roadblock—in the pursuit of redress for algorithmic harms. Courts often find algorithmic harms non-cognizable and rarely require developers to address material claims of harm. To address the core challenges of taking algorithms to court, we develop a relational approach to algorithmic accountability that emphasizes not what the actors do nor the results of their actions, but rather how interlocking relationships of accountability are constituted in a triadic relationship between actors, forums, and public(s). As is the case in other regulatory domains, we believe that impact assessments (and similar accountability documentation) can provide the grounds for contestation between these parties, but only when that triad is structured such that the public(s) are able to cohere around shared experiences and interests, contest the outcomes of algorithmic systems that affect their lives, and make demands upon the other parties. Where courts now find algorithmic harms non-cognizable, an impact assessment regime can potentially create procedural rights to protect substantive rights of the public(s). This would require algorithmic accountability policies currently under consideration to provide the public(s) with adequate standing in courts, and opportunities to access and contest the actor's documentation and the forum's judgments.
- We investigate the privacy practices of labor organizers in the computing technology industry and explore the changes in these practices as a response to remote work. Our study is situated at the intersection of two pivotal shifts in workplace dynamics: (a) the increase in online workplace communications due to remote work, and (b) the resurgence of the labor movement and an increase in collective action in workplaces, especially in the tech industry, where this phenomenon has been dubbed the tech worker movement. The shift of work-related communications to online digital platforms in response to an increase in remote work is creating new opportunities for, and risks to, the privacy of workers. These risks are especially significant for organizers of collective action, with several well-publicized instances of retaliation against labor organizers by companies. Through a series of qualitative interviews with 29 tech workers involved in collective action, we investigate how labor organizers assess and mitigate risks to privacy while engaging in these actions. Among the most common risks that organizers experienced are retaliation from their employer, lateral worker conflict, emotional burnout, and the possibility of information about the collective effort leaking to management. Depending on the nature and source of the risk, organizers use a blend of digital security practices and community-based mechanisms. We find that digital security practices are more relevant when the threat comes from management, while community management and moderation are central to protecting organizers from lateral worker conflict. Since labor organizing is a collective rather than individual project, individual privacy and collective privacy are intertwined, sometimes in conflict and often mutually constitutive. Notions of privacy that solely center individuals are often incompatible with the needs of organizers, who noted that safety in numbers could only be achieved when workers presented a united front to management. Based on our interviews, we identify key topics for future research, such as the growing prevalence of surveillance software and the needs of international and gig worker organizers. We conclude with design recommendations that can help create safer, more secure, and more private tools to better address the risks that organizers face.
- Algorithmic impact assessments (AIAs) are increasingly being proposed as a mechanism for algorithmic accountability. These assessments are seen as potentially useful for anticipating, avoiding, and mitigating the negative consequences of algorithmic decision-making systems (ADS). At the same time, what an AIA would entail remains under-specified. While promising, AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of impact assessments structure the possible governance outcomes. Decisions about what type of effects count as an impact, when impacts are assessed, whose interests are considered, who is invited to participate, who conducts the assessment, the public availability of the assessment, and what the outputs of the assessment might be all shape the forms of accountability that AIA proponents seek to encourage. These considerations remain open, and will determine whether and how AIAs can function as a viable governance mechanism in the broader algorithmic accountability toolkit, especially with regard to furthering the public interest. Because AIAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.
- Algorithmic impact assessments (AIAs) are an emergent form of accountability for entities that build and deploy automated decision-support systems. These are modeled after impact assessments in other domains. Our study of the history of impact assessments shows that "impacts" are an evaluative construct that enable institutions to identify and ameliorate harms experienced because of a policy decision or system. Every domain has different expectations and norms about what constitutes impacts and harms, how potential harms are rendered as the impacts of a particular undertaking, who is responsible for conducting that assessment, and who has the authority to act on the impact assessment to demand changes to that undertaking. By examining proposals for AIAs in relation to other domains, we find that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability. To address this challenge of algorithmic accountability, and as impact assessments become a commonplace process for evaluating harms, the FAccT community should A) understand impacts as objects constructed for evaluative purposes, B) attempt to construct impacts as close as possible to actual harms, and C) recognize that accountability governance requires the input of various types of expertise and affected communities. We conclude with lessons for assembling cross-expertise consensus for the co-construction of impacts and to build robust accountability relationships.
- We measure the molecular gas environment near recent (<100 yr old) supernovae (SNe) using ∼1″ or ≤150 pc resolution CO (2–1) maps from the PHANGS–Atacama Large Millimeter/submillimeter Array (ALMA) survey of nearby star-forming galaxies. This is arguably the first such study to approach the scales of individual massive molecular clouds (M_mol ≳ 10^5.3 M_⊙). Using the Open Supernova Catalog, we identify 63 SNe within the PHANGS–ALMA footprint. We detect CO (2–1) emission near ∼60% of the sample at 150 pc resolution, compared to ∼35% of map pixels with CO (2–1) emission, and up to ∼95% of the SNe at 1 kpc resolution, compared to ∼80% of map pixels with CO (2–1) emission. We expect that the ∼60% of SNe located within the same 150 pc beam as a giant molecular cloud will likely interact with these clouds in the future, consistent with the observation of widespread SN–molecular gas interaction in the Milky Way, while the other ∼40% of SNe without strong CO (2–1) detections will deposit their energy in the diffuse interstellar medium, perhaps helping drive large-scale turbulence or galactic outflows. Broken down by type, we detect CO (2–1) emission at the sites of ∼85% of our 9 stripped-envelope SNe (SESNe), ∼40% of our 34 Type II SNe, and ∼35% of our 13 Type Ia SNe, indicating that SESNe are most closely associated with the brightest CO (2–1)-emitting regions in our sample. Our results confirm that SN explosions are not restricted to only the densest gas, and instead exert feedback across a wide range of molecular gas densities.
- When completed, the PHANGS–HST project will provide a census of roughly 50 000 compact star clusters and associations, as well as human morphological classifications for roughly 20 000 of those objects. These large numbers motivated the development of a more objective and repeatable method to help perform source classifications. In this paper, we consider the results for five PHANGS–HST galaxies (NGC 628, NGC 1433, NGC 1566, NGC 3351, NGC 3627) using classifications from two convolutional neural network architectures (RESNET and VGG) trained using deep transfer learning techniques; a minimal sketch of this transfer-learning setup appears after this list. The results are compared to classifications performed by humans. The primary result is that the neural network classifications are comparable in quality to the human classifications, with typical agreement around 70 to 80 per cent for Class 1 clusters (symmetric, centrally concentrated) and 40 to 70 per cent for Class 2 clusters (asymmetric, centrally concentrated). If Classes 1 and 2 are considered together, the agreement is 82 ± 3 per cent. Dependencies on magnitudes, crowding, and background surface brightness are examined. A detailed description of the criteria and methodology used for the human classifications is included, along with an examination of systematic differences between PHANGS–HST and LEGUS. The distribution of data points in a colour–colour diagram is used as a ‘figure of merit’ to further test the relative performances of the different methods. The effects on science results (e.g. determinations of mass and age functions) of using different cluster classification methods are examined and found to be minimal.
- Understanding the spatial distribution of metals within galaxies allows us to study the processes of chemical enrichment and mixing in the interstellar medium. In this work, we map the 2D distribution of metals using Gaussian process regression (GPR) for 19 star-forming galaxies observed with the Very Large Telescope/Multi Unit Spectroscopic Explorer (VLT–MUSE) as part of the PHANGS–MUSE survey; a minimal GPR sketch appears after this list. We find that 12 of our 19 galaxies show significant 2D metallicity variation. Those without significant variations typically have fewer metallicity measurements, indicating this is due to the dearth of H II regions in these galaxies, rather than a lack of higher-order variation. After subtracting a linear radial gradient, we see no enrichment in the spiral arms versus the disc. We measure the 50 per cent correlation scale from the two-point correlation function of these radially subtracted maps, finding it to typically be an order of magnitude smaller than the fitted GPR kernel scale length. We study the dependence of the two-point correlation scale length on a number of global galaxy properties. We find no relationship between the 50 per cent correlation scale and the overall gas turbulence, in tension with existing theoretical models. We also find that more actively star-forming galaxies and earlier-type galaxies have larger 50 per cent correlation scales. The size and stellar mass surface density do not appear to correlate with the 50 per cent correlation scale, indicating that perhaps the evolutionary state of the galaxy and its current star formation activity are the strongest indicators of the homogeneity of the metal distribution.
- We compare mid-infrared (mid-IR), extinction-corrected Hα, and CO (2–1) emission at 70–160 pc resolution in the first four PHANGS–JWST targets. We report correlation strengths, intensity ratios, and power-law fits relating emission in JWST’s F770W, F1000W, F1130W, and F2100W bands to CO and Hα. At these scales, CO and Hα each correlate strongly with mid-IR emission, and these correlations are each stronger than the one relating CO to Hα emission. This reflects that mid-IR emission simultaneously acts as a dust column density tracer, leading to a good match with the molecular-gas-tracing CO, and as a heating tracer, leading to a good match with Hα. By combining mid-IR, CO, and Hα at scales where the overall correlation between cold gas and star formation begins to break down, we are able to separate these two effects. We model the mid-IR above I_ν = 0.5 MJy sr⁻¹ at F770W, a cut designed to select regions where the molecular gas dominates the interstellar medium (ISM) mass. This bright emission can be described to first order by a model that combines a CO-tracing component and an Hα-tracing component; a minimal sketch of such a two-component fit appears after this list. The best-fitting models imply that ∼50% of the mid-IR flux arises from molecular gas heated by the diffuse interstellar radiation field, with the remaining ∼50% associated with bright, dusty star-forming regions. We discuss differences between the F770W, F1000W, and F1130W bands and the continuum-dominated F2100W band and suggest next steps for using the mid-IR as an ISM tracer.
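The PHANGS–HST abstract above describes training convolutional networks via deep transfer learning to classify star clusters into morphological classes. The sketch below, in PyTorch, illustrates that general setup; the backbone choice (an ImageNet-pretrained ResNet-18), the hyperparameters, and the `train_loader` pipeline of cluster cutouts are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal transfer-learning sketch in the spirit of the PHANGS-HST
# cluster-classification paper. The four-class scheme matches the
# paper's Class 1-4 morphologies; everything else is illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # Class 1-4 morphological categories

# Start from an ImageNet-pretrained ResNet and replace the final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze the pretrained feature extractor; train only the new head.
for name, param in model.named_parameters():
    if not name.startswith("fc"):
        param.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_epoch(train_loader):
    """One epoch over (image, label) batches of cluster cutouts."""
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Fine-tuning more of the backbone (rather than only the classification head) is a common variant of this setup; the abstract does not specify which layers were retrained.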
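The PHANGS–MUSE abstract above maps 2D metallicity variations with Gaussian process regression after removing a linear radial gradient. The scikit-learn sketch below illustrates that two-step procedure on mock data; the kernel choice and the synthetic H II-region positions and metallicities are assumptions for illustration, not the survey's measurements.

```python
# Minimal GPR sketch: subtract a linear radial metallicity gradient,
# then model the residual 2D variation with a Gaussian process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Mock HII-region positions (kpc) and metallicities 12+log(O/H).
xy = rng.uniform(-5, 5, size=(200, 2))
r = np.hypot(xy[:, 0], xy[:, 1])
z = 8.6 - 0.05 * r + 0.02 * rng.normal(size=200)

# Step 1: fit and subtract the linear radial gradient.
slope, intercept = np.polyfit(r, z, 1)
residual = z - (slope * r + intercept)

# Step 2: GPR on the residuals captures higher-order 2D structure.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xy, residual)

# Evaluate the smooth residual-metallicity map on a regular grid.
grid = np.stack(np.meshgrid(np.linspace(-5, 5, 50),
                            np.linspace(-5, 5, 50)), axis=-1).reshape(-1, 2)
map2d, sigma = gpr.predict(grid, return_std=True)
```

The fitted RBF length scale plays the role of the "GPR kernel scale length" the abstract contrasts with the 50 per cent correlation scale of the two-point correlation function.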
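Finally, the PHANGS–JWST abstract above models bright F770W emission, to first order, as a linear combination of a CO-tracing and an Hα-tracing component, i.e. roughly I_F770W ≈ a·I_CO + b·I_Hα above the 0.5 MJy sr⁻¹ cut. A minimal least-squares sketch of such a two-component fit follows; the mock intensities, units, and recovered coefficients are illustrative only, not the paper's measurements.

```python
# Two-component decomposition sketch: fit bright F770W mid-IR emission
# as a CO-tracing term plus an Halpha-tracing term,
#     I_F770W ~ a * I_CO + b * I_Halpha,
# using mock matched-resolution intensity maps flattened to pixel lists.
import numpy as np

rng = np.random.default_rng(1)

I_co = rng.lognormal(mean=0.0, sigma=1.0, size=5000)      # mock, K km/s
I_halpha = rng.lognormal(mean=0.0, sigma=1.0, size=5000)  # mock, arb. units
I_f770w = 0.3 * I_co + 0.5 * I_halpha + 0.05 * rng.normal(size=5000)

# Keep only pixels above the molecular-gas-dominated brightness cut
# (0.5 MJy/sr in the paper; the unit here is nominal for mock data).
bright = I_f770w > 0.5
A = np.column_stack([I_co[bright], I_halpha[bright]])

# Linear least-squares fit for the two component coefficients (a, b).
(a, b), *_ = np.linalg.lstsq(A, I_f770w[bright], rcond=None)

# Fraction of the bright flux attributed to the CO-tracing component
# (the paper finds roughly a 50/50 split between the two components).
frac_co = a * I_co[bright].sum() / I_f770w[bright].sum()
print(f"a={a:.2f}, b={b:.2f}, CO-tracing flux fraction ~ {frac_co:.0%}")
```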